-
Synthetic data is emerging as a powerful tool in computer vision, offering advantages in privacy and security. As generative AI models advance, they enable the creation of large-scale, diverse datasets that eliminate concerns related to sensitive data sharing and costly data collection processes. However, fundamental questions arise: (1) can synthetic data replace natural data in a continual learning (CL) setting? (2) How much synthetic data is sufficient to achieve a desired performance? (3) How well does a network trained on synthetic data generalize? To address these questions, we propose a sample minimization strategy for CL that enhances efficiency, generalization, and robustness by selectively removing uninformative or redundant samples during the training phase. We apply this method to a sequence of tasks derived from the GenImage dataset [35]. This setting allows us to train early tasks entirely on synthetic data and to analyze how well the resulting knowledge transfers to subsequent tasks and to evaluation on natural images. Furthermore, our method allows us to investigate the impact of removing potentially incorrect, redundant, or harmful training samples. We aim to maximize CL efficiency by removing uninformative images and to enhance robustness through adversarial training and data removal. We also study how the training order of synthetic and natural data, and the choice of generative models, affect CL performance and the amount of natural data that can be removed. Our findings provide key insights into how generative examples can be used for adaptive, efficient CL in evolving environments.
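A minimal sketch of the sample-removal idea above: before training on each task in a sequence, the training set is pruned and only the most informative samples are kept. Scoring samples by their current per-sample loss, the 70% keep ratio, the toy model, and the random "tasks" are all illustrative assumptions, not the abstract's actual criterion, architecture, or data.

```python
# Hedged sketch: prune low-information samples before training each CL task.
# The per-sample-loss score is an assumed stand-in for the paper's criterion.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset, TensorDataset

def select_informative(model, dataset, keep_ratio=0.7, batch_size=256, device="cpu"):
    """Keep the `keep_ratio` fraction of samples with the highest per-sample loss."""
    model.eval()
    criterion = nn.CrossEntropyLoss(reduction="none")
    losses = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=batch_size, shuffle=False):
            losses.append(criterion(model(x.to(device)), y.to(device)).cpu())
    losses = torch.cat(losses)
    k = int(keep_ratio * len(dataset))
    keep_idx = torch.topk(losses, k).indices.tolist()
    return Subset(dataset, keep_idx)

# Toy usage: two sequential "tasks" (e.g., synthetic images first, natural later).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
tasks = [TensorDataset(torch.randn(512, 3, 32, 32), torch.randint(0, 2, (512,)))
         for _ in range(2)]

for task in tasks:
    pruned = select_informative(model, task, keep_ratio=0.7)
    model.train()
    for x, y in DataLoader(pruned, batch_size=64, shuffle=True):
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
```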
-
We consider the development of practical stochastic quasi-Newton methods, and in particular Kronecker-factored block-diagonal BFGS and L-BFGS methods, for training deep neural networks (DNNs). In DNN training, the number of variables and gradient components n is often on the order of tens of millions, and the Hessian has n^2 elements. Consequently, computing and storing a full n × n BFGS approximation, or even storing a modest number of (step, change-in-gradient) vector pairs for use in an L-BFGS implementation, is out of the question. In our proposed methods, we approximate the Hessian by a block-diagonal matrix and use the structure of the gradient and Hessian to further approximate each block, which corresponds to a layer, as the Kronecker product of two much smaller matrices. This is analogous to the approach in KFAC, which computes a Kronecker-factored block-diagonal approximation to the Fisher matrix for use in a stochastic natural gradient method. Because of the indefinite and highly variable nature of the Hessian in a DNN, we also propose a new damping approach that keeps the upper as well as the lower bounds of the BFGS and L-BFGS approximations bounded. In tests on autoencoder feed-forward network models with either nine or thirteen layers, applied to three datasets, our methods outperformed or performed comparably to KFAC and state-of-the-art first-order stochastic methods.
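The NumPy sketch below illustrates two ingredients mentioned in the abstract on a toy scale: a layer's Hessian block approximated as the Kronecker product of two small factors, and damping of the (step, gradient-change) pair before a BFGS update so the curvature stays safely positive. It uses a standard Powell-style damping rule as a stand-in; the paper's own damping scheme, factor construction, and constants differ and are not reproduced here.

```python
# Toy sketch (not the paper's algorithm): Kronecker-factored Hessian block plus
# a Powell-style damped BFGS update for a single "layer".
import numpy as np

def kron_block(A, G):
    """Kronecker-factored approximation of one layer's Hessian block."""
    return np.kron(A, G)  # (d_in*d_out) x (d_in*d_out); formed explicitly only for this toy example

def damp_pair(s, y, H, mu=0.2):
    """Damp y so that s^T y stays at least mu * s^T H s (Powell-style damping)."""
    sHs = s @ H @ s
    sy = s @ y
    if sy < mu * sHs:  # curvature too small or negative: interpolate toward H s
        theta = (1 - mu) * sHs / (sHs - sy)
        y = theta * y + (1 - theta) * (H @ s)
    return y

def bfgs_update(H, s, y):
    """Standard BFGS update of a Hessian approximation H with pair (s, y)."""
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (s @ y)

# Toy usage on one small layer block.
rng = np.random.default_rng(0)
d_in, d_out = 4, 3
A, G = np.eye(d_in), np.eye(d_out)     # small Kronecker factors (assumed initial values)
H = kron_block(A, G)                   # Hessian block for one layer
s = rng.normal(size=d_in * d_out)      # parameter step
y = rng.normal(size=d_in * d_out)      # change in gradient
y = damp_pair(s, y, H)                 # keep the update well-conditioned
H = bfgs_update(H, s, y)
```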
-
We consider distributed optimization under communication constraints for training deep learning models. We propose a new algorithm whose parameter updates rely on two forces: a regular gradient step, and a corrective direction dictated by the currently best-performing worker (the leader). Our method differs from the parameter-averaging scheme EASGD in a number of ways: (i) our objective formulation does not change the location of stationary points compared to the original optimization problem; (ii) we avoid convergence decelerations caused by pulling local workers descending to different local minima toward each other (i.e., toward the average of their parameters); (iii) our update by design breaks the curse of symmetry (the phenomenon of being trapped in poorly generalizing sub-optimal solutions in symmetric non-convex landscapes); and (iv) our approach is more communication-efficient, since it broadcasts only the parameters of the leader rather than those of all workers. We provide a theoretical analysis of the batch version of the proposed algorithm, which we call Leader Gradient Descent (LGD), and of its stochastic variant (LSGD). Finally, we implement an asynchronous version of our algorithm and extend it to the multi-leader setting, where we form groups of workers, each represented by its own local leader (the best performer in the group), and update each worker with a corrective direction comprised of two attractive forces: one toward the local leader and one toward the global leader (the best performer among all workers). The multi-leader setting is well-aligned with current hardware architecture, where local workers forming a group lie within a single computational node and …
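A minimal synchronous sketch of the leader-pull update described above, assuming a toy objective, a fixed pull strength lam, and a single-leader loop in place of the asynchronous, multi-leader implementation; the objective, step sizes, and noise model are illustrative assumptions, not the paper's.

```python
# Hedged sketch of a leader-based SGD update: each worker takes a stochastic
# gradient step plus a corrective pull toward the current best-performing worker.
import numpy as np

def loss(x):
    """Toy mildly non-convex objective standing in for a training loss."""
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5 * x))

def grad(x):
    return 2 * (x - 1.0) + 0.5 * np.cos(5 * x)

rng = np.random.default_rng(0)
n_workers, dim = 4, 10
workers = [rng.normal(size=dim) for _ in range(n_workers)]
lr, lam = 0.05, 0.1   # learning rate and leader-pull strength (assumed values)

for step in range(200):
    # Leader = worker with the lowest current loss; only its parameters are shared.
    leader = min(workers, key=loss).copy()
    for i, x in enumerate(workers):
        g = grad(x) + 0.01 * rng.normal(size=dim)      # stochastic gradient
        workers[i] = x - lr * g + lam * (leader - x)   # gradient step + pull to leader

best = min(workers, key=loss)
print("best loss:", loss(best))
```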